The search and discoverability of XR
WebXR feels like '3D web' history repeating itself, but this time we carry GPUs in our gear.
How, though, can people easily discover the wonderful 3D content that flows through these magic chips?
Centralized online stores alone just aren't going to cut it; we know this, we feel this.
Can we find important clues in the history of the web, and could they apply to WebXR too?
Obviously, this is a complex and loaded question.
The matrix below triggers some interesting thoughts:
- A-Frame and JanusVR are the only projects that promote `<a-link>` traversal (cross-domain navigation using anchor tags) in XR
- WebXR in combination with A-Frame is interesting from a semantic-linking point of view
- W3C, Google, and Firefox currently prevent cross-domain WebXR sessions
- Sadly for WebXR, they seem to superimpose all the security risks of the 2D web onto WebXR: a theoretical chicken-and-egg dilemma
- This also discourages the use of public, semantically linked assets in proprietary players
- Developers might prefer A-Frame (HTML) markup over 3D CSS for spatial content
- When HTML markup is not the starting point, semantic linking might become trickier later on
- WebXR/WebGL/WebGPU might be history repeating itself: first we overcomplicate things, and later a markup language emerges
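As an illustration of such anchor-style traversal, a minimal A-Frame scene using the `<a-link>` primitive might look like this (a sketch: the script version and destination URL are placeholders, not taken from the text above):

```html
<!-- Minimal sketch of cross-domain traversal with A-Frame's <a-link>
     primitive; the href below is a hypothetical destination. -->
<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Activating the portal navigates to another world,
           much like clicking an anchor tag on the 2D web -->
      <a-link href="https://other-world.example/index.html"
              title="Other world"
              position="0 1.6 -3"></a-link>
    </a-scene>
  </body>
</html>
```

This is exactly the kind of declarative, crawlable hyperlink that the 2D web was built on, expressed in spatial markup.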
The matrix applies weighted criteria (vertical) to different 2D/3D web technologies (horizontal).
The totals represent each technology's weighted score:
| criteria (importance) | 2D HTML | 3D CSS | WebXR markup (aframe) | WebXR code (exported/threejs) | Native XR (Unity/Unreal) | OpenXR |
|---|---|---|---|---|---|---|
| 1. semantic linking (3x) | 3 | 3 | 3 | 1 | 1 | 1 |
| 2. indexable by search engines (2x) | 3 | 3 | 2 | 1 | 1 | 1 |
| 3. voice input (1x) | 2 | 2 | 2 | 2 | 2 | 2 |
| 4. textual input (1x) | 3 | 3 | 3 | 2 | 2 | 2 |
| 5. cheap / easy to develop (2x) | 3 | 1 | 3 | 1 | 1 | 1 |
| 6. easy to share (3x) | 3 | 3 | 3 | 3 | 1 | 1 |
| 7. public semantic linkable assets (1x) | 3 | 3 | 3 | 3 | 2 | 1 |
| **total score** | **38** | **34** | **36** | **23** | **16** | **15** |
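For transparency, the totals above are simple weighted sums: each criterion's score multiplied by its importance weight, summed per technology. A small sketch of that arithmetic:

```javascript
// Sketch: recomputing the matrix totals as weighted sums
// (criterion score × importance weight, summed per technology).
const weights = [3, 2, 1, 1, 2, 3, 1]; // importance of criteria 1-7

// Score columns per technology, criteria 1-7 in order
const scores = {
  "2D HTML":                  [3, 3, 2, 3, 3, 3, 3],
  "3D CSS":                   [3, 3, 2, 3, 1, 3, 3],
  "WebXR markup (aframe)":    [3, 2, 2, 3, 3, 3, 3],
  "WebXR code (threejs)":     [1, 1, 2, 2, 1, 3, 3],
  "Native XR (Unity/Unreal)": [1, 1, 2, 2, 1, 1, 2],
  "OpenXR":                   [1, 1, 2, 2, 1, 1, 1],
};

const totals = Object.fromEntries(
  Object.entries(scores).map(([tech, col]) =>
    [tech, col.reduce((sum, s, i) => sum + s * weights[i], 0)])
);

console.log(totals);
// 2D HTML: 38, 3D CSS: 34, aframe: 36, threejs: 23,
// Unity/Unreal: 16, OpenXR: 15
```

Tweaking the weights is an easy way to sanity-check how sensitive the ranking is to one's priorities.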